crop field


This 'dual-use' electric tractor can sow fields and run guns

Popular Science

A Spanish startup called Voltrac says it is building a new breed of smart tractor--one that could sow fields by day and run weapons to soldiers by night. And while the Hot Wheels-looking, fully electric tractor is currently remote-controlled, Voltrac is working to make its next version fully autonomous. This "dual-use" tractor, first spotted by The Next Web, reportedly weighs 3.5 tons--roughly the weight of a young African elephant--and has a carrying capacity of nearly 8,900 pounds. It can reach a top speed of around 24 miles per hour and operate for between 8 and 20 hours, thanks to two large 200kW batteries.


Multi-Region Transfer Learning for Segmentation of Crop Field Boundaries in Satellite Images with Limited Labels

Kerner, Hannah, Sundar, Saketh, Satish, Mathan

arXiv.org Artificial Intelligence

The goal of field boundary delineation is to predict the polygonal boundaries and interiors of individual crop fields in overhead remotely sensed images (e.g., from satellites or drones). Automatic delineation of field boundaries is a necessary task for many real-world use cases in agriculture, such as estimating cultivated area in a region or predicting end-of-season yield in a field. Field boundary delineation can be framed as an instance segmentation problem, but presents unique research challenges compared to traditional computer vision datasets used for instance segmentation. The practical applicability of previous work is also limited by the assumption that a sufficiently large labeled dataset is available where field boundary delineation models will be applied, which is not the reality for most regions (especially under-resourced regions such as Sub-Saharan Africa). We present an approach for segmentation of crop field boundaries in satellite images in regions lacking labeled data that uses multi-region transfer learning to adapt model weights for the target region. We show that our approach outperforms existing methods and that multi-region transfer learning substantially boosts performance for multiple model architectures. Our implementation and datasets are publicly available to enable use of the approach by end-users and serve as a benchmark for future work.
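As a toy illustration of the transfer-learning recipe described above (pretrain where labels are plentiful, then fine-tune where they are scarce), the sketch below uses a one-feature logistic classifier on synthetic data. The regions, features, and hyperparameters are all invented; the paper's actual models are deep segmentation networks:

```python
import math
import random

def train_logreg(data, w=0.0, b=0.0, lr=0.1, epochs=200):
    """Fit a one-feature logistic classifier by SGD.
    data: list of (x, y) pairs with y in {0, 1}."""
    for _ in range(epochs):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

random.seed(0)
# "Source regions": plentiful labels (1 = field pixel, 0 = background).
source = [(random.gauss(2.0, 0.5), 1) for _ in range(200)] + \
         [(random.gauss(-2.0, 0.5), 0) for _ in range(200)]
# "Target region": only ten labels, with a shifted feature distribution.
target = [(random.gauss(3.0, 0.5), 1) for _ in range(5)] + \
         [(random.gauss(-1.0, 0.5), 0) for _ in range(5)]

# Step 1: pretrain on the label-rich source regions.
w, b = train_logreg(source)
# Step 2: adapt the pretrained weights with the scarce target labels.
w, b = train_logreg(target, w=w, b=b, lr=0.05, epochs=50)

def predict(x):
    return 1 if w * x + b > 0 else 0
```

The key move is the second call to `train_logreg`: it starts from the source-region weights instead of from scratch, so a handful of target labels suffices to adapt the decision boundary.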


Combining Deep Learning and Street View Imagery to Map Smallholder Crop Types

Soler, Jordi Laguarta, Friedel, Thomas, Wang, Sherrie

arXiv.org Artificial Intelligence

Accurate crop type maps are an essential source of information for monitoring yield progress at scale, projecting global crop production, and planning effective policies. To date, however, crop type maps remain challenging to create in low and middle-income countries due to a lack of ground truth labels for training machine learning models. Field surveys are the gold standard in terms of accuracy but require an often-prohibitively large amount of time, money, and statistical capacity. In recent years, street-level imagery, such as Google Street View, KartaView, and Mapillary, has become available around the world. Such imagery contains rich information about crop types grown at particular locations and times. In this work, we develop an automated system to generate crop type ground references using deep learning and Google Street View imagery. The method efficiently curates a set of street view images containing crop fields, trains a model to predict crop type by utilizing weakly-labelled images from disparate out-of-domain sources, and combines predicted labels with remote sensing time series to create a wall-to-wall crop type map. We show that, in Thailand, the resulting country-wide map of rice, cassava, maize, and sugarcane achieves an accuracy of 93%. We publicly release the first-ever crop type map for all of Thailand for 2022 at 10m-resolution with no gaps. To our knowledge, this is the first time a 10m-resolution, multi-crop map has been created for any smallholder country. As the availability of roadside imagery expands, our pipeline provides a way to map crop types at scale around the globe, especially in underserved smallholder regions.
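The final mapping step above, propagating street-view-derived labels through satellite time series, can be caricatured as matching each pixel's seasonal profile to the nearest labelled reference profile. The crop names come from the abstract, but the series values and the 1-nearest-neighbour rule are invented stand-ins for the paper's actual classifier:

```python
def sq_dist(a, b):
    """Squared Euclidean distance between two equal-length series."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Hypothetical per-crop seasonal profiles (e.g., a vegetation index at
# four dates), averaged from pixels the street-view model labelled.
references = {
    "rice":      [0.2, 0.5, 0.8, 0.6],
    "cassava":   [0.3, 0.4, 0.5, 0.5],
    "sugarcane": [0.4, 0.7, 0.7, 0.3],
}

def classify_pixel(series):
    """Label a pixel by its nearest labelled reference profile."""
    return min(references, key=lambda crop: sq_dist(series, references[crop]))
```

Applied to every pixel in a satellite tile, a rule of this shape turns a sparse set of roadside labels into a wall-to-wall map.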


Productive Crop Field Detection: A New Dataset and Deep Learning Benchmark Results

Nascimento, Eduardo, Just, John, Almeida, Jurandy, Almeida, Tiago

arXiv.org Artificial Intelligence

In precision agriculture, detecting productive crop fields is an essential practice that allows the farmer to evaluate operating performance separately and compare different seed varieties, pesticides, and fertilizers. However, manually identifying productive fields is often a time-consuming and error-prone task. Previous studies explore different methods to detect crop fields using advanced machine learning algorithms, but they often lack good quality labeled data. In this context, we propose a high-quality dataset generated by machine operation combined with Sentinel-2 images tracked over time. As far as we know, it is the first one to overcome the lack of labeled samples by using this technique. We then apply a semi-supervised classification of unlabeled data and state-of-the-art supervised and self-supervised deep learning methods to detect productive crop fields automatically. Finally, the results demonstrate high accuracy in Positive Unlabeled learning, which perfectly fits the problem, where we have high confidence in the positive samples. The best performance was achieved by a Triplet Loss Siamese network when an accurate labeled dataset is available, and by Contrastive Learning when a comprehensive labeled dataset is not.
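The Positive-Unlabeled setting above, trusted positives from machine operation and everything else unlabelled, can be sketched with the classic Elkan-Noto scaling trick. That is one standard PU recipe, not necessarily the exact method benchmarked in the paper, and the one-feature data and constants below are synthetic:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.1, epochs=300):
    """One-feature logistic regression on (x, s) pairs,
    where s = 1 means 'labelled positive' and s = 0 means 'unlabelled'."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, s in data:
            p = sigmoid(w * x + b)
            w -= lr * (p - s) * x
            b -= lr * (p - s)
    return w, b

random.seed(1)
# Trusted positives: fields confirmed productive by machine operation.
positives = [random.gauss(2.0, 0.4) for _ in range(50)]
# Unlabelled pool: a hidden mix of productive and non-productive fields.
unlabelled = [random.gauss(2.0, 0.4) for _ in range(25)] + \
             [random.gauss(-2.0, 0.4) for _ in range(75)]

# Step 1: train a 'labelled vs unlabelled' classifier g(x) ~ P(s=1 | x).
w, b = train([(x, 1) for x in positives] + [(x, 0) for x in unlabelled])

# Step 2 (Elkan-Noto): estimate c = P(s=1 | y=1) as the mean score of
# the known positives, then recover P(y=1 | x) = g(x) / c.
c = sum(sigmoid(w * x + b) for x in positives) / len(positives)

def p_productive(x):
    return min(1.0, sigmoid(w * x + b) / c)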


A.I. turns 57 million crop fields into stunning abstract art

#artificialintelligence

This is where precision farming meets abstract art. OneSoil, an agritech start-up from Belarus, has just launched an interactive digital map of crop data for more than 57 million fields across the U.S. and Europe. The map provides detailed information on various crop types in 43 countries collected over the past three years, allowing users to see how fields changed from 2016 to 2018. The OneSoil map makes local and global trends in crop production available to everyone with a stake in farming. In so doing, it helps predict the market performance of these crops and aids decision-making by farmers and traders.


Robotic peregrine falcon can scare birds away from crop fields

New Scientist

A flying robot inspired by a male peregrine falcon can scare away flocks of birds in fields within 5 minutes of flying over and keep them away for up to four hours, on average. Birds can eat crops on farmland, or damage aircraft at airports if they accidentally collide with them. As a result, several methods have been developed to deter them from congregating at these sites. These include traditional scarecrows, recordings of bird distress calls, and lethal approaches involving guns or trained birds of prey.


Visualizing the diversity of representations learned by Bayesian neural networks

Grinwald, Dennis, Bykov, Kirill, Nakajima, Shinichi, Höhne, Marina M. -C.

arXiv.org Machine Learning

Explainable artificial intelligence (XAI) aims to make learning machines less opaque, and offers researchers and practitioners various tools to reveal the decision-making strategies of neural networks. In this work, we investigate how XAI methods can be used for exploring and visualizing the diversity of feature representations learned by Bayesian neural networks (BNNs). Our goal is to provide a global understanding of BNNs by making their decision-making strategies a) visible and tangible through feature visualizations and b) quantitatively measurable with a distance measure learned by contrastive learning. Our work provides new insights into the posterior distribution in terms of human-understandable feature information with regard to the underlying decision-making strategies. Our main findings are the following: 1) global XAI methods can be applied to explain the diversity of decision-making strategies of BNN instances, 2) Monte Carlo dropout exhibits increased diversity in feature representations compared to the multimodal posterior approximation of MultiSWAG, 3) the diversity of learned feature representations highly correlates with the uncertainty estimates, and 4) the inter-mode diversity of the multimodal posterior decreases as the network width increases, while the intra-mode diversity increases. Our findings are consistent with recent deep neural network theory, providing additional intuitions about what the theory implies in terms of human-understandable concepts.
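Finding 2 above, that Monte Carlo dropout yields diverse feature representations, rests on a simple mechanic: sample several dropout masks and compare the resulting feature vectors pairwise. The four-unit "layer" below is a fabricated stand-in for a real network; only the sampling-and-distance mechanics mirror the idea:

```python
import random

WEIGHTS = [0.5, -1.0, 2.0, 0.3]  # fixed toy weights for a 4-unit layer

def features(x, mask):
    """Hidden features for input x under a given dropout mask."""
    return [w * x * m for w, m in zip(WEIGHTS, mask)]

def mc_dropout_samples(x, n, p_keep=0.5, seed=0):
    """Draw n feature vectors, each under a fresh Bernoulli dropout mask."""
    rng = random.Random(seed)
    return [features(x, [1 if rng.random() < p_keep else 0
                         for _ in WEIGHTS]) for _ in range(n)]

def mean_pairwise_dist(samples):
    """Representation diversity = average pairwise L2 distance."""
    total, count = 0.0, 0
    for i in range(len(samples)):
        for j in range(i + 1, len(samples)):
            total += sum((a - b) ** 2
                         for a, b in zip(samples[i], samples[j])) ** 0.5
            count += 1
    return total / count
```

With dropout active the sampled representations differ, so the diversity measure is positive; with `p_keep=1.0` every mask is identical and the measure collapses to zero.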


10 Hard Lessons Learned For Creating a Dataset in Our Crops Identification Challenge to Fight Hunger

#artificialintelligence

This article is the result of working in Omdena's AI challenge to estimate crop yields with the UN World Food Program in Nepal. The problem was tough, the challenges were huge, and resources were scarce. Still, a community of 36 collaborators managed to build a solution with 89% accuracy. This article details the problems we had to solve and the lessons learned in creating an appropriate dataset. It is estimated that 821 million people -- one in nine -- still go to bed on an empty stomach each night.


Towards a Decentralized, Autonomous Multiagent Framework for Mitigating Crop Loss

Ceren, Roi, Quinn, Shannon, Raines, Glen

arXiv.org Artificial Intelligence

We propose a generalized decision-theoretic system for a heterogeneous team of autonomous agents who are tasked with online identification of phenotypically expressed stress in crop fields. This system employs four distinct types of agents, specific to four available sensor modalities: satellites (Layer 3), uninhabited aerial vehicles (L2), uninhabited ground vehicles (L1), and static ground-level sensors (L0). Layers 3, 2, and 1 are tasked with performing image processing at the available resolution of the sensor modality and, along with data generated by layer 0 sensors, identifying erroneous differences that arise over time. Our goal is to limit the use of the more computationally and temporally expensive subsequent layers. Therefore, from layer 3 to 1, each layer only investigates areas that previous layers have identified as potentially afflicted by stress. We introduce a reinforcement learning technique based on Perkins' Monte Carlo Exploring Starts for a generalized Markovian model for each layer's decision problem, and label the system the Agricultural Distributed Decision Framework (ADDF). As our domain is real-world and online, we illustrate implementations of the two major components of our system: a clustering-based image processing methodology and a two-layer POMDP implementation.
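The layered screening idea above, where each cheap layer gates what the next, more expensive layer inspects, reduces to a cascade of filters. The layer names follow the abstract, but the screening features and thresholds below are invented for illustration; the real system uses image processing and POMDP policies at each layer:

```python
def satellite_screen(areas):   # Layer 3: coarse satellite anomaly flag
    return [a for a in areas if a["ndvi_drop"] > 0.1]

def uav_screen(areas):         # Layer 2: finer aerial imagery check
    return [a for a in areas if a["leaf_discoloration"] > 0.3]

def ugv_inspect(areas):        # Layer 1: ground-level confirmation
    return [a for a in areas if a["stress_score"] > 0.5]

def addf_pipeline(areas):
    """Only areas flagged by layer k are ever examined by layer k-1."""
    return ugv_inspect(uav_screen(satellite_screen(areas)))

fields = [
    {"id": 1, "ndvi_drop": 0.05, "leaf_discoloration": 0.9, "stress_score": 0.9},
    {"id": 2, "ndvi_drop": 0.20, "leaf_discoloration": 0.1, "stress_score": 0.9},
    {"id": 3, "ndvi_drop": 0.30, "leaf_discoloration": 0.6, "stress_score": 0.8},
]
```

Here only field 3 survives all three screens, so only one field ever pays the cost of a ground-vehicle visit.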


The Supercomputer That Won Jeopardy Is Now Helping California Save Water

#artificialintelligence

To get information about water use, Watson uses "visual recognition" to scan images of land parcels for valuable information, according to Pesenti. But unlike less-powerful image detection software, Watson doesn't just identify a specific object -- say, a crop field -- in an image. Instead, it combs through lots and lots of information about the image -- like the objects it contains and the colors of those objects -- and uses that information to "understand" the image as a whole. In the case of OmniEarth, researchers can use Watson not just to determine if a given parcel of land contains a crop field, but also to calculate the exact amount of water used by that parcel based on all of the information contained in the photo. What's more, Watson doesn't need to know much about water consumption to tell OmniEarth if people are using too much water.